Search Results
Search for: All records
Total Resources: 4
Author / Contributor
- Huang, Audrey (3)
- Jiang, Nan (3)
- Caballero, Camila (1)
- Cohodes, Emily M (1)
- Gee, Dylan G (1)
- Ghavamzadeh, Mohammad (1)
- Gold, Gillian (1)
- Haberman, Jason T (1)
- Hodges, Hopewell R (1)
- Huang, Audrey Y (1)
- Huang, Baihe (1)
- Keding, Taylor J (1)
- Kribakaran, Sahana (1)
- Lee, Jason (1)
- McCauley, Sarah (1)
- Odriozola, Paola (1)
- Petrik, Marek (1)
- Pierre, Jasmyne C (1)
- Sisk, Lucinda M (1)
- Talton, Ashley (1)
- Sisk, Lucinda M; Keding, Taylor J; Cohodes, Emily M; McCauley, Sarah; Pierre, Jasmyne C; Odriozola, Paola; Kribakaran, Sahana; Haberman, Jason T; Zacharek, Sadie J; Hodges, Hopewell R; et al (Biological Psychiatry: Cognitive Neuroscience and Neuroimaging). Free, publicly accessible full text available February 1, 2026.
- Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions. Huang, Audrey; Jiang, Nan (Advances in Neural Information Processing Systems). Off-policy evaluation often refers to two related tasks: estimating the expected return of a policy and estimating its value function (or other functions of interest, such as density ratios). While recent work on marginalized importance sampling (MIS) shows that the former enjoys provable guarantees under realizable function approximation, the latter is only known to be feasible under much stronger assumptions, such as prohibitively expressive discriminators. In this work, we provide guarantees for off-policy function estimation under only realizability, by imposing proper regularization on the MIS objectives. Compared to commonly used regularization in MIS, our regularizer is much more flexible and can account for an arbitrary user-specified distribution, under which the learned function will be close to the ground truth. We provide an exact characterization of the optimal dual solution that needs to be realized by the discriminator class, which determines the data-coverage assumption in the case of value-function learning. As another surprising observation, the regularizer can be altered to relax the data-coverage requirement, and to eliminate it completely in the ideal case with strong side information.
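The regularized MIS objective described in this abstract is a saddle-point problem over the function of interest and a discriminator class. As a rough illustration only, the tabular sketch below runs gradient descent-ascent on an objective of the form E_D[w · δ_Q] − (λ/2) · E_μ[w²]; the quadratic penalty, the uniform target policy, and the synthetic data are all assumptions of the sketch, not the paper's construction.

```python
# Toy sketch of a regularized minimax MIS objective for off-policy
# Q-function estimation. Illustrative only: the quadratic regularizer
# under a user-specified distribution `mu`, the weight `lam`, and the
# synthetic data are assumptions, not the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, lam, lr = 5, 2, 0.9, 1.0, 0.05

# Synthetic offline dataset of (s, a, r, s') transitions.
N = 2000
s = rng.integers(nS, size=N)
a = rng.integers(nA, size=N)
r = rng.normal(size=N)
s_next = rng.integers(nS, size=N)

pi = np.full((nS, nA), 1.0 / nA)         # target policy (uniform toy)
mu = np.full((nS, nA), 1.0 / (nS * nA))  # user-specified error-measuring dist.

Q = np.zeros((nS, nA))  # primal variable: Q-function estimate
W = np.zeros((nS, nA))  # dual variable: MIS weight / discriminator

for _ in range(500):
    # Bellman residual of the current Q under the target policy.
    v_next = (pi[s_next] * Q[s_next]).sum(axis=1)
    delta = r + gamma * v_next - Q[s, a]

    # Objective: E_D[w(s,a) * delta] - (lam / 2) * E_mu[w(s,a)^2].
    # Gradient ascent on W ...
    gW = np.zeros_like(W)
    np.add.at(gW, (s, a), delta / N)
    gW -= lam * mu * W
    W += lr * gW

    # ... and gradient descent on Q.
    gQ = np.zeros_like(Q)
    np.add.at(gQ, (s, a), -W[s, a] / N)
    for a2 in range(nA):  # gamma * w(s,a) * pi(a2|s') flows into Q(s',a2)
        np.add.at(gQ, (s_next, a2), gamma * W[s, a] * pi[s_next, a2] / N)
    Q -= lr * gQ
```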
- Zhan, Wenhao; Huang, Baihe; Huang, Audrey; Jiang, Nan; Lee, Jason (Proceedings of the Thirty-Fifth Conference on Learning Theory). Sample-efficiency guarantees for offline reinforcement learning (RL) often rely on strong assumptions on both the function classes (e.g., Bellman-completeness) and the data coverage (e.g., all-policy concentrability). Despite recent efforts to relax these assumptions, existing works relax only one of the two factors, leaving the strong assumption on the other intact. This raises an important open problem: can we achieve sample-efficient offline RL with weak assumptions on both factors? In this paper we answer the question in the positive. We analyze a simple algorithm based on the primal-dual formulation of MDPs, where the dual variables (discounted occupancies) are modeled using a density-ratio function against the offline data. With proper regularization, the algorithm enjoys polynomial sample complexity under only realizability and single-policy concentrability. We also provide alternative analyses based on different assumptions, shedding light on the nature of primal-dual algorithms for offline RL.
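The primal-dual formulation mentioned in this abstract treats the discounted occupancy as a dual variable and models it as a density ratio against the offline data. The toy sketch below shows what gradient descent-ascent on a regularized Lagrangian of this kind can look like; the quadratic regularizer and the synthetic tabular data are assumptions of the example, and this is not the paper's algorithm.

```python
# Toy sketch of the primal-dual (LP) view of offline RL, with the dual
# occupancy variable modeled as a density ratio w(s,a) against the data
# distribution. Illustrative only: the quadratic regularizer `alpha` and
# the synthetic data are assumptions, not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma, alpha, lr = 4, 2, 0.9, 0.5, 0.05

# Synthetic offline dataset plus samples of the initial state s0.
N = 2000
s = rng.integers(nS, size=N)
a = rng.integers(nA, size=N)
r = rng.normal(size=N)
s_next = rng.integers(nS, size=N)
s0 = rng.integers(nS, size=N)

V = np.zeros(nS)        # primal variable: state-value function
W = np.zeros((nS, nA))  # dual variable: density ratio d^pi / d^D (>= 0)

for _ in range(500):
    td = r + gamma * V[s_next] - V[s]

    # Lagrangian: (1-gamma) E[V(s0)] + E_D[w * td] - (alpha/2) E_D[w^2].
    # Ascend in W, projecting onto w >= 0 (occupancies are nonnegative) ...
    gW = np.zeros_like(W)
    np.add.at(gW, (s, a), (td - alpha * W[s, a]) / N)
    W = np.maximum(W + lr * gW, 0.0)

    # ... and descend in V.
    w = W[s, a]
    gV = np.zeros(nS)
    np.add.at(gV, s0, (1.0 - gamma) / N)
    np.add.at(gV, s, -w / N)
    np.add.at(gV, s_next, gamma * w / N)
    V -= lr * gV
```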